Semantic communication has attracted growing interest because it can significantly reduce the amount of data to be transmitted without losing critical information. Most existing work explores the semantic encoding and transmission of text, applying techniques from natural language processing (NLP) to interpret the meaning of the text. In this paper, we conceive semantic communications for image data, which is richer in semantics and more bandwidth-sensitive. We propose a reinforcement-learning-based adaptive semantic coding (RL-ASC) approach that encodes images beyond the pixel level. First, we define the semantic concept of image data, which comprises category, spatial arrangement, and visual feature as the representation unit, and propose a convolutional semantic encoder to extract semantic concepts. Second, we propose an image reconstruction criterion that evolves from conventional pixel similarity to semantic similarity and perceptual performance. Third, we design a novel RL-based semantic bit allocation model, whose reward is the increase in rate-semantic-perceptual performance achieved after encoding a certain semantic concept with an adaptive quantization level. Thus, task-related information is preserved and reconstructed properly while less important data is discarded. Finally, we propose a generative adversarial network (GAN)-based semantic decoder that fuses local and global features via an attention module. Experimental results demonstrate that the proposed RL-ASC is robust to noise, reconstructs visually pleasing and semantically consistent images, and saves bits compared with standard codecs and other deep-learning-based image codecs.
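As a concrete illustration of the bit-allocation idea, the sketch below assumes a hypothetical scalar reward that trades a concept's semantic and perceptual gain against the bits its quantization level consumes; the candidate level set, gain model, and rate weight are all illustrative, not the paper's actual design.

```python
import numpy as np

def allocation_reward(sem_gain, perceptual_gain, bits, rate_weight=0.02):
    # Reward = improvement in semantic + perceptual quality minus a rate cost
    # (illustrative form; the paper's reward is its rate-semantic-perceptual gain).
    return sem_gain + perceptual_gain - rate_weight * bits

levels = [4, 8, 16]  # assumed candidate quantization bit-widths per concept
for importance in (0.2, 0.9):  # a less vs. more task-relevant semantic concept
    # Toy gain model: quality grows logarithmically with bits, scaled by importance.
    rewards = [allocation_reward(importance * np.log2(b), 0.1 * importance, b)
               for b in levels]
    best = levels[int(np.argmax(rewards))]
    print(f"importance={importance}: best quantization level = {best} bits")
```

As expected, the more task-relevant concept earns a finer quantization level while the less important one is coded coarsely, which is exactly the adaptive behavior the RL agent is rewarded for.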
We present results from a large-scale experiment on pretraining encoders with parameter counts ranging from 700M to 9.3B, their subsequent distillation into smaller models ranging from 17M to 170M parameters, and their application to the natural language understanding (NLU) component of a virtual assistant system. Though we train using 70% spoken-form data, our teacher models perform comparably to XLM-R and mT5 when evaluated on the written-form cross-lingual natural language inference (XNLI) corpus. We perform a second stage of pretraining on our teacher models using in-domain data from our system, improving error rates by 3.86% relative for intent classification and 7.01% relative for slot filling. We find that even a 170M-parameter model distilled from our Stage 2 teacher has a 2.88% better intent classification error rate and a 7.69% better slot filling error rate than a 2.3B-parameter teacher trained only on public data (Stage 1), emphasizing the importance of in-domain data for pretraining. When evaluated offline using labeled NLU data, our 17M-parameter Stage 2 distilled model outperforms both XLM-R Base (85M params) and DistilBERT (42M params) by 4.23% to 6.14% relative. Finally, we present results from a full virtual assistant experimentation platform, where we find that models trained using our pretraining and distillation pipeline outperform models distilled from 85M-parameter teachers by 3.74%-4.91% on an automatic measurement of full-system user dissatisfaction.
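The distillation step can be pictured with a standard soft-target objective; the sketch below is a generic knowledge-distillation loss (temperature, blending weight, and toy shapes assumed), not the exact recipe used in this work.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # Blend KL divergence against the temperature-scaled teacher distribution
    # with cross-entropy against the gold intent/slot labels.
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                    F.softmax(teacher_logits / T, dim=-1),
                    reduction="batchmean") * T * T
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# Toy shapes: a batch of 8 utterances over 12 intent classes.
s = torch.randn(8, 12); t = torch.randn(8, 12); y = torch.randint(0, 12, (8,))
print(distillation_loss(s, t, y))
```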
Document-level relation extraction (DRE) aims to identify the relations between two entities, where an entity may correspond to multiple mentions that span sentence boundaries. Few previous studies have investigated mention integration, which may be problematic because coreferential mentions do not contribute equally to a specific relation. Moreover, prior efforts mainly focus on reasoning at the entity level rather than capturing the global interactions between entity pairs. In this paper, we propose two novel techniques, context-guided mention integration and inter-pair reasoning (CGM2IR), to improve DRE. Instead of simply applying average pooling, the contexts are utilized to guide the integration of coreferential mentions in a weighted-sum manner. In addition, inter-pair reasoning executes an iterative algorithm on the entity-pair graph to model the interdependency of relations. We evaluate our CGM2IR model on three widely used benchmark datasets, namely DocRED, CDR, and GDA. Experimental results show that our model outperforms previous state-of-the-art models.
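A minimal sketch of the weighted-sum integration, assuming dot-product affinities between mention embeddings and a pair-context vector (the actual scoring function in CGM2IR may differ):

```python
import torch
import torch.nn.functional as F

def integrate_mentions(mention_embs, context_emb):
    # mention_embs: (num_mentions, d); context_emb: (d,)
    # Weight each coreferential mention by its affinity to the pair context,
    # rather than averaging all mentions uniformly.
    scores = mention_embs @ context_emb            # (num_mentions,)
    weights = F.softmax(scores, dim=0)             # context-guided weights
    return weights @ mention_embs                  # weighted-sum entity embedding

mentions = torch.randn(3, 64)   # three mentions of one entity across sentences
context = torch.randn(64)       # representation of the entity-pair context
entity = integrate_mentions(mentions, context)
print(entity.shape)             # torch.Size([64])
```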
Semantic communication, regarded as a breakthrough beyond the Shannon paradigm, aims at the successful transmission of the semantic information conveyed by the source rather than the accurate reception of each single symbol or bit regardless of its meaning. This article provides an overview of semantic communications. After briefly reviewing Shannon information theory, we discuss semantic communications with theory, frameworks, and system design enabled by deep learning. Different from the symbol/bit error rate used to measure conventional communication systems, new performance metrics for semantic communications are also discussed. The article concludes with several open questions.
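As an illustration of how a semantic-level metric differs from bit-level accuracy, the toy sketch below contrasts bit error rate with an embedding-cosine semantic similarity; the specific embedding and similarity measure are illustrative choices, not metrics prescribed by this article.

```python
import numpy as np

def bit_error_rate(tx_bits, rx_bits):
    # Conventional metric: fraction of bits received incorrectly.
    tx, rx = np.asarray(tx_bits), np.asarray(rx_bits)
    return float(np.mean(tx != rx))

def semantic_similarity(emb_tx, emb_rx):
    # One semantic-level alternative: cosine similarity between embeddings
    # of the transmitted and recovered messages (embedding model assumed).
    return float(emb_tx @ emb_rx /
                 (np.linalg.norm(emb_tx) * np.linalg.norm(emb_rx)))

# A message can be bit-wise corrupted yet semantically near-intact:
tx = [0, 1, 1, 0, 1]; rx = [0, 1, 0, 0, 1]
print("BER:", bit_error_rate(tx, rx))
emb_a = np.array([0.9, 0.1, 0.4])   # toy embedding of the sent message
emb_b = np.array([0.88, 0.12, 0.41])  # toy embedding of the recovered message
print("semantic similarity:", semantic_similarity(emb_a, emb_b))
```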
The thriving of artificial intelligence (AI) applications is driving the further evolution of wireless networks. It has been envisioned that 6G will be transformative and will revolutionize the evolution of wireless from "connected things" to "connected intelligence". However, state-of-the-art AI systems based on deep learning and big data analytics require tremendous computation and communication resources, causing significant latency, energy consumption, network congestion, and privacy leakage in both the training and inference processes. By embedding model training and inference capabilities into the network edge, edge AI stands out as a disruptive technology for 6G that seamlessly integrates sensing, communication, computation, and intelligence, thereby improving the efficiency, effectiveness, privacy, and security of 6G networks. In this paper, we present our vision of scalable and trustworthy edge AI systems with an integrated design of wireless communication strategies and decentralized machine learning models. Newly designed wireless networks, service-driven resource allocation optimization methods, and a holistic end-to-end system architecture to support edge AI are described. Standardization, software and hardware platforms, and application scenarios are also discussed to facilitate the industrialization and commercialization of edge AI systems.
Stance detection models may tend to rely on dataset bias in the text part as a shortcut and thus fail to sufficiently learn the interaction between the targets and texts. Recent debiasing methods usually treat features learned by small models, or by large models at earlier training steps, as bias features and propose to exclude the branch learning those bias features during inference. However, most of these methods fail to disentangle the ``good'' stance features and ``bad'' bias features in the text part. In this paper, we investigate how to mitigate dataset bias in stance detection. Motivated by causal effects, we leverage a novel counterfactual inference framework, which enables us to capture the dataset bias in the text part as the direct causal effect of the text on stances and to reduce it by subtracting the direct text effect from the total causal effect. We further model bias features as features that correlate with the stance labels but fail on intermediate stance-reasoning subtasks, and propose an adversarial bias learning module to model the bias more accurately. To verify whether our model can better capture the interaction between texts and targets, we test it on recently proposed test sets that evaluate the understanding of the task from various aspects. Experiments demonstrate that our proposed method (1) better models the bias features, and (2) outperforms existing debiasing baselines on both the original dataset and most of the newly constructed test sets.
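A minimal sketch of the inference-time subtraction, assuming a full text+target branch and a text-only bias branch whose fusion weight is a free hyperparameter (the framework's actual fusion may be more elaborate):

```python
import torch
import torch.nn.functional as F

def debiased_logits(full_logits, text_only_logits, lam=1.0):
    # Total causal effect comes from the full text+target branch; the direct
    # text effect comes from a text-only branch. Subtracting the latter at
    # inference removes the textual shortcut from the prediction.
    return full_logits - lam * text_only_logits

full = torch.tensor([[2.0, 0.5, -1.0]])        # logits from text+target model
text_only = torch.tensor([[1.5, -0.2, -0.8]])  # logits from the bias branch
print(F.softmax(debiased_logits(full, text_only), dim=-1))
```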
Text-based speech editing allows users to edit speech by intuitively cutting, copying, and pasting text to speed up the process of editing speech. In previous work, CampNet (context-aware mask prediction network) was proposed to realize text-based speech editing, significantly improving the quality of edited speech. This paper aims at a new task: adding an emotional effect to the edited speech during text-based speech editing to make the generated speech more expressive. To achieve this task, we propose Emo-CampNet (emotion CampNet), which provides the option of emotional attributes for the generated speech in text-based speech editing and has the one-shot ability to edit unseen speakers' speech. Firstly, we propose an end-to-end emotion-selectable text-based speech editing model. The key idea of the model is to control the emotion of the generated speech by introducing additional emotion attributes based on the context-aware mask prediction network. Secondly, to prevent the emotion of the generated speech from being interfered with by the emotional components in the original speech, a neutral content generator is proposed to remove the emotion from the original speech, optimized within a generative adversarial framework. Thirdly, two data augmentation methods are proposed to enrich the emotional and pronunciation information in the training set, enabling the model to edit unseen speakers' speech. Experimental results show that 1) Emo-CampNet can effectively control the emotion of the generated speech during text-based speech editing and can edit unseen speakers' speech, and 2) detailed ablation experiments further prove the effectiveness of the emotion selectivity and data augmentation methods. The demo page is available at https://hairuo55.github.io/Emo-CampNet/
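The core conditioning idea can be sketched as injecting a learned emotion-attribute embedding into the context features consumed by the mask-prediction network; all module and parameter names below are assumed for illustration, not taken from the paper.

```python
import torch
import torch.nn as nn

class EmotionConditioner(nn.Module):
    # Toy sketch: add a learned emotion embedding to the context features
    # that the mask-prediction network uses to regenerate the edited span.
    def __init__(self, n_emotions=5, d_model=256):
        super().__init__()
        self.emotion_emb = nn.Embedding(n_emotions, d_model)

    def forward(self, context_feats, emotion_id):
        # context_feats: (batch, frames, d_model); emotion_id: (batch,)
        emo = self.emotion_emb(emotion_id).unsqueeze(1)  # (batch, 1, d_model)
        return context_feats + emo                       # broadcast over frames

cond = EmotionConditioner()
feats = torch.randn(2, 100, 256)                 # acoustic context features
print(cond(feats, torch.tensor([0, 3])).shape)   # torch.Size([2, 100, 256])
```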
Large-scale cross-modal pre-training paradigms have recently shown ubiquitous success on a wide range of downstream tasks, e.g., zero-shot classification, retrieval and image captioning. However, their successes highly rely on the scale and quality of web-crawled data that naturally contain incomplete and noisy information (e.g., wrong or irrelevant content). Existing works either design manual rules to clean data or generate pseudo-targets as auxiliary signals for reducing noise impact, which do not explicitly tackle both the incorrect and incomplete challenges simultaneously. In this paper, to automatically mitigate the impact of noise by solely mining over existing data, we propose a principled Noise-robust Language-Image Pre-training framework (NLIP) to stabilize pre-training via two schemes: noise-harmonization and noise-completion. First, in noise-harmonization scheme, NLIP estimates the noise probability of each pair according to the memorization effect of cross-modal transformers, then adopts noise-adaptive regularization to harmonize the cross-modal alignments with varying degrees. Second, in noise-completion scheme, to enrich the missing object information of text, NLIP injects a concept-conditioned cross-modal decoder to obtain semantic-consistent synthetic captions to complete noisy ones, which uses the retrieved visual concepts (i.e., objects' names) for the corresponding image to guide captioning generation. By collaboratively optimizing noise-harmonization and noise-completion schemes, our NLIP can alleviate the common noise effects during image-text pre-training in a more efficient way. Extensive experiments show the significant performance improvements of our NLIP using only 26M data over existing pre-trained models (e.g., CLIP, FILIP and BLIP) on 12 zero-shot classification datasets, MSCOCO image captioning and zero-shot image-text retrieval tasks.
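A simplified sketch of the noise-harmonization scheme, assuming a CLIP-style contrastive loss whose per-pair terms are down-weighted by the estimated noise probability (the actual noise-adaptive regularization in NLIP may differ):

```python
import torch
import torch.nn.functional as F

def noise_adaptive_clip_loss(img_emb, txt_emb, noise_prob, tau=0.07):
    # Down-weight the contrastive alignment of image-text pairs in proportion
    # to their estimated noise probability, so clean pairs dominate training.
    img = F.normalize(img_emb, dim=-1); txt = F.normalize(txt_emb, dim=-1)
    logits = img @ txt.t() / tau
    targets = torch.arange(img.size(0))
    per_pair = F.cross_entropy(logits, targets, reduction="none")
    weights = 1.0 - noise_prob                     # cleaner pairs count more
    return (weights * per_pair).sum() / weights.sum()

img = torch.randn(4, 128); txt = torch.randn(4, 128)
noise = torch.tensor([0.05, 0.7, 0.1, 0.3])       # estimated per-pair noise
print(noise_adaptive_clip_loss(img, txt, noise))
```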
Text summarization is recognised as one of the NLP downstream tasks and has been extensively investigated in recent years. It can assist people in perceiving information rapidly from the Internet, including news articles, social posts, videos, etc. Most existing research works attempt to develop summarization models to produce a better output. However, limitations of most existing models have emerged, including unfaithfulness and factual errors. In this paper, we propose a novel model, named Knowledge-aware Abstractive Text Summarization, which leverages the advantages offered by Knowledge Graphs to enhance the standard Seq2Seq model. On top of that, the Knowledge Graph triplets are extracted from the source text and utilised to provide keywords with relational information, producing coherent and factually consistent summaries. We conduct extensive experiments using real-world data sets. The results reveal that the proposed framework can effectively utilise the information from the Knowledge Graph and significantly reduce factual errors in the summary.
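One plausible way to expose the extracted triplets to a standard Seq2Seq model is to linearize and prepend them to the source text; the tag format below is assumed for illustration, not the paper's exact interface.

```python
def linearize_triplets(triplets):
    # Serialize (head, relation, tail) triplets with assumed marker tokens so
    # entities and their relations act as factual anchors for the summarizer.
    return " ".join(f"<h> {h} <r> {r} <t> {t}" for h, r, t in triplets)

triplets = [("Marie Curie", "won", "Nobel Prize"),
            ("Marie Curie", "field", "physics")]
source = "Marie Curie conducted pioneering research on radioactivity ..."
model_input = linearize_triplets(triplets) + " <sep> " + source
print(model_input)  # fed to any standard Seq2Seq summarization model
```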
Dunhuang murals fuse Chinese and other national styles, forming a self-contained school of Chinese-style Buddhist art with very high historical and cultural value and research significance. In particular, the lines of Dunhuang murals are highly general and expressive, reflecting the characters' distinctive personalities and complex inner emotions. Therefore, outline drawings of the murals are of great significance to the study of Dunhuang culture. The contour generation of Dunhuang murals belongs to image edge detection, an important branch of computer vision that aims to extract salient contour information from images. Although convolution-based deep learning networks have achieved good results in image edge extraction by exploring the contextual and semantic features of images, some local detail information is lost as the receptive field enlarges, which prevents them from generating reasonable outline drawings of murals. In this paper, we propose a novel edge detector based on self-attention combined with convolution to generate line drawings of Dunhuang murals. Compared with existing edge detection methods, firstly, a new residual self-attention and convolution mixed module (Ramix) is proposed to fuse local and global features in feature maps. Secondly, a novel densely connected backbone extraction network is designed to efficiently propagate rich edge feature information from shallow layers into deep layers. Experiments on different public datasets show that our method is able to generate sharper and richer edge maps than existing methods. In addition, testing on the Dunhuang mural dataset shows that our method achieves very competitive performance.
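A rough sketch of a residual self-attention and convolution mixed block in the spirit of Ramix, assuming a 3x3 convolution branch for local edge detail and a self-attention branch for global context, fused through a residual connection (the published module's exact design may differ):

```python
import torch
import torch.nn as nn

class Ramix(nn.Module):
    # Sketch: convolution keeps local detail, self-attention adds global
    # context, and a residual connection fuses both with the input features.
    def __init__(self, channels=32, heads=4):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, x):                          # x: (B, C, H, W)
        local = self.conv(x)
        b, c, h, w = x.shape
        seq = self.norm(x.flatten(2).transpose(1, 2))  # (B, H*W, C)
        glob, _ = self.attn(seq, seq, seq)
        glob = glob.transpose(1, 2).reshape(b, c, h, w)
        return x + local + glob                    # residual fusion

block = Ramix()
print(block(torch.randn(1, 32, 16, 16)).shape)     # torch.Size([1, 32, 16, 16])
```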